Evaluating the Divergent Auto-Encoder (DIVA) as a Machine Learning Algorithm

Authors

  • Kenneth J. Kurtz
  • Xavier Oyarzabal
Abstract

The divergent auto-encoder (Kurtz, 2007) offers an alternative to the multi-layer perceptron (MLP) for classification learning via back-propagation. The artificial neural network classifies based on its success in reconstructing the input features (from shared, reduced-dimensionality recodings) in terms of a generative model of each category. Successful simulations of rapid human learning of elemental, non-linearly separable category structures suggest potential in machine learning. In a series of simulation studies using benchmark problems from the UCI database, the divergent auto-encoder showed learning and generalization performance comparable to state-of-the-art algorithms, with several major advantages: no evidence of overfitting, low sensitivity to parameter settings, and fast runtimes. Discussion focuses on three issues: (1) for what types of problems the divergent auto-encoder is better or worse than leading algorithms; (2) comparison with the MLP as the default architecture for classification learning with artificial neural networks; (3) comparison with other (Bayesian) generative methods for classification learning.
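The architecture described in the abstract lends itself to a compact sketch: a shared encoder produces a reduced-dimensionality recoding, one decoder channel per category tries to reconstruct the input from that recoding, only the correct category's channel is trained on each example, and classification goes to the channel with the lowest reconstruction error. The PyTorch sketch below illustrates this on a toy XOR-style (non-linearly separable) problem; the layer sizes, sigmoid activations, optimizer, and toy data are illustrative assumptions, not the settings used in the paper's simulations.

```python
import torch
import torch.nn as nn

class DIVA(nn.Module):
    """Sketch of a divergent auto-encoder: a shared encoder feeds one decoder
    ("channel") per category; classification picks the channel that reconstructs
    the input with the lowest error."""

    def __init__(self, n_features, n_hidden, n_categories):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, n_hidden), nn.Sigmoid())
        self.channels = nn.ModuleList(
            [nn.Sequential(nn.Linear(n_hidden, n_features), nn.Sigmoid())
             for _ in range(n_categories)]
        )

    def forward(self, x):
        h = self.encoder(x)                                          # shared recoding
        return torch.stack([ch(h) for ch in self.channels], dim=1)   # (batch, category, feature)

    def predict(self, x):
        errors = ((self.forward(x) - x.unsqueeze(1)) ** 2).sum(dim=-1)
        return errors.argmin(dim=1)                                  # best-reconstructing channel

# Toy usage on an XOR-style, non-linearly separable two-feature problem.
X = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = torch.tensor([0, 1, 1, 0])
model = DIVA(n_features=2, n_hidden=3, n_categories=2)
opt = torch.optim.Adam(model.parameters(), lr=0.05)

for epoch in range(1000):
    recons = model(X)
    correct_channel = recons[torch.arange(len(y)), y]   # error is backpropagated only
    loss = ((correct_channel - X) ** 2).sum()           # through the correct channel
    opt.zero_grad(); loss.backward(); opt.step()

print(model.predict(X))  # ideally tensor([0, 1, 1, 0])
```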

Similar articles

Recognition of SAR Target Based on Multilayer Auto-encoder and SNN

Automatic target recognition (ATR) of synthetic aperture radar (SAR) images is investigated. A feature extraction algorithm for SAR images based on a multilayer auto-encoder is proposed. The method makes use of a probabilistic neural network, the restricted Boltzmann machine (RBM), to model the probability distribution of the environment. Through the formation of a more expressive multilayer neural network, the...

Image Representation Learning Using Graph Regularized Auto-Encoders

Learning a representation for images that has low dimension and preserves the valuable information of the original space is an important task. From the manifold perspective, this is done by using a series of local invariant mappings. Inspired by the recent successes of deep architectures, we propose a local invariant deep nonlinear mapping algorithm, called the graph regularized auto-encoder (GAE...
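As a rough illustration of the idea in the snippet above, the following sketch augments a plain auto-encoder's reconstruction loss with a graph Laplacian penalty that keeps the codes of graph-adjacent samples close. The k-nearest-neighbour affinity graph, layer sizes, and penalty weight lam are assumptions made for the sketch, not details taken from the GAE paper.

```python
import torch
import torch.nn as nn

def knn_affinity(X, k=5):
    """Binary, symmetric k-nearest-neighbour affinity matrix (self excluded)."""
    d = torch.cdist(X, X)
    d.fill_diagonal_(float("inf"))
    idx = d.topk(k, largest=False).indices
    W = torch.zeros(X.size(0), X.size(0))
    W.scatter_(1, idx, 1.0)
    return torch.maximum(W, W.t())

def laplacian_penalty(H, W):
    """sum_ij W_ij * ||h_i - h_j||^2, written as 2 * tr(H^T (D - W) H)."""
    L = torch.diag(W.sum(dim=1)) - W
    return 2.0 * torch.trace(H.t() @ L @ H)

class GraphRegularizedAE(nn.Module):
    def __init__(self, n_in, n_code):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_in, n_code), nn.Tanh())
        self.dec = nn.Linear(n_code, n_in)

    def forward(self, x):
        h = self.enc(x)
        return h, self.dec(h)

X = torch.randn(200, 20)                      # placeholder data
W = knn_affinity(X, k=5)                      # fixed neighbourhood graph over the data
model = GraphRegularizedAE(n_in=20, n_code=4)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
lam = 1e-3                                    # strength of the graph term (assumed)

for epoch in range(200):
    h, x_hat = model(X)
    loss = ((x_hat - X) ** 2).mean() + lam * laplacian_penalty(h, W) / X.size(0)
    opt.zero_grad(); loss.backward(); opt.step()
```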

What regularized auto-encoders learn from the data-generating distribution

What do auto-encoders learn about the underlying data generating distribution? Recent work suggests that some auto-encoder variants do a good job of capturing the local manifold structure of data. This paper clarifies some of these previous observations by showing that minimizing a particular form of regularized reconstruction error yields a reconstruction function that locally characterizes th...

Competitive and Penalized Clustering Auto-encoder

Auto-encoders (AEs) have been widely applied in different fields of machine learning. However, as a deep model, the AE has a large number of learnable parameters, which can cause over-fitting and slow learning in practice. Many researchers have studied the intrinsic structure of the AE and proposed different useful methods to regularize those parameters. In this paper, we present a ...

Marginalized Denoising Auto-encoders for Nonlinear Representations

Denoising auto-encoders (DAEs) have been successfully used to learn new representations for a wide range of machine learning tasks. During training, DAEs make many passes over the training dataset and reconstruct it from partial corruption generated by a pre-specified corrupting distribution. This process learns robust representations, though at the expense of requiring many training epochs, i...
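The training regime the snippet describes, repeated passes that corrupt the input and ask the network to reconstruct the clean version, can be sketched as follows; the masking-noise corruption, layer sizes, and optimizer are illustrative assumptions. The marginalized variant proposed in the paper instead marginalizes over the corrupting distribution analytically, which is what removes the need for many explicit corruption-and-reconstruction passes.

```python
import torch
import torch.nn as nn

class DenoisingAE(nn.Module):
    """Plain denoising auto-encoder: encode a corrupted input, decode, and
    compare the reconstruction against the clean input."""

    def __init__(self, n_in, n_hidden):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_in, n_hidden), nn.Sigmoid())
        self.dec = nn.Linear(n_hidden, n_in)

    def forward(self, x):
        return self.dec(self.enc(x))

X = torch.rand(500, 30)                  # placeholder training data in [0, 1]
dae = DenoisingAE(n_in=30, n_hidden=10)
opt = torch.optim.Adam(dae.parameters(), lr=1e-3)
p_corrupt = 0.3                          # fraction of features zeroed per pass (assumed)

for epoch in range(100):                 # many passes over the training set
    mask = (torch.rand_like(X) > p_corrupt).float()
    x_tilde = X * mask                        # sample from the corrupting distribution
    loss = ((dae(x_tilde) - X) ** 2).mean()   # reconstruct the clean input
    opt.zero_grad(); loss.backward(); opt.step()
```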


Journal:

Volume   Issue

Pages  -

Publication date: 2011